Optimal Rates for the Regularized Learning Algorithms under General Source Condition

Authors

Abstract


Similar articles

Optimal Rates for the Regularized Learning Algorithms under General Source Condition

We consider learning algorithms under a general source condition with polynomial decay of the eigenvalues of the integral operator, in the vector-valued function setting. We discuss the upper convergence rates of the Tikhonov regularizer under a general source condition corresponding to an increasing monotone index function. The convergence issues are studied for general regularization schemes by using...

Full text

Optimal results for a time-fractional inverse diffusion problem under the Hölder type source condition

In the present paper we consider a time-fractional inverse diffusion problem, where data is given at $x=1$ and the solution is required in the interval 0...

Full text

arXiv:1611.01900v1 [stat.ML] 7 Nov 2016. Optimal rates for the regularized learning algorithms under general source condition

We consider learning algorithms under a general source condition with polynomial decay of the eigenvalues of the integral operator, in the vector-valued function setting. We discuss the upper convergence rates of the Tikhonov regularizer under a general source condition corresponding to an increasing monotone index function. The convergence issues are studied for general regularization schemes by using...

Full text

Optimal rates for stochastic convex optimization under Tsybakov noise condition

We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer $x^*_{f,S}$, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as $\|x - x^*_{f,S}\|$ around its...

Full text
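The setup in the abstract above, minimizing a convex f over a convex set S with T stochastic first-order oracle queries, is the setting of projected stochastic gradient descent. The sketch below is a generic illustration, not the paper's algorithm: the quadratic objective, unit-ball constraint, noise level, and 1/(t+1) step sizes are all assumptions chosen for the example.

```python
import numpy as np

def sgd_projected(oracle, project, x0, T, steps):
    # Projected SGD: take a stochastic (sub)gradient step, project back
    # onto the feasible set S, and return the averaged iterate.
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(T):
        g = oracle(x)                    # unbiased estimate of a subgradient at x
        x = project(x - steps(t) * g)
        avg += x
    return avg / T

# Usage: minimize f(x) = ||x - c||^2 over the unit ball, with noisy gradients.
rng = np.random.default_rng(1)
c = np.array([0.3, -0.2])
oracle = lambda x: 2 * (x - c) + 0.1 * rng.standard_normal(2)
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_hat = sgd_projected(oracle, project, np.zeros(2), T=2000,
                      steps=lambda t: 1.0 / (t + 1))
```

The Tsybakov-like growth condition in the abstract controls how fast such iterates can converge: the faster f grows around its minimizer, the faster the attainable rate.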

Error Estimates for Multi-Penalty Regularization under General Source Condition

In learning theory, the convergence issues of the regression problem are investigated with least-square Tikhonov regularization schemes in both the RKHS-norm and the $L^2$-norm. We consider the multi-penalized least-square regularization scheme under the general source condition with polynomial decay of the eigenvalues of the integral operator. One of the motivations for this work is to dis...

Full text
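The multi-penalty scheme in the abstract above combines several regularization terms in one least-squares objective. A minimal finite-dimensional sketch, assuming a linear model with an identity (ridge) penalty plus a first-difference (smoothness) penalty, is the Tikhonov-type problem min_x ||Ax - y||^2 + sum_i lam_i ||B_i x||^2; the operators and penalty weights here are illustrative, not the paper's.

```python
import numpy as np

def multi_penalty_ls(A, y, penalties):
    # Solve min_x ||A x - y||^2 + sum_i lam_i ||B_i x||^2 via the
    # normal equations (A^T A + sum_i lam_i B_i^T B_i) x = A^T y.
    lhs = A.T @ A
    for lam, B in penalties:
        lhs = lhs + lam * (B.T @ B)
    return np.linalg.solve(lhs, A.T @ y)

# Usage: recover a smooth vector with ridge + smoothness penalties.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = np.linspace(0.0, 1.0, 10)
y = A @ x_true + 0.05 * rng.standard_normal(30)
D = np.diff(np.eye(10), axis=0)   # discrete first-difference operator
x_hat = multi_penalty_ls(A, y, [(1e-3, np.eye(10)), (1e-2, D)])
```

Balancing the penalty weights against each other, under a source condition on the target, is exactly the question the error estimates in the cited work address.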


Journal

Journal title: Frontiers in Applied Mathematics and Statistics

Year: 2017

ISSN: 2297-4687

DOI: 10.3389/fams.2017.00003